Learning Optimal Decision Trees Under Memory Constraints

Authors

Abstract

Existing algorithms for learning optimal decision trees fall into two categories: those based on Mixed Integer Programming (MIP) solvers, and those based on dynamic programming (DP) over itemsets. While DP-based algorithms are the fastest, their main disadvantage compared to MIP-based approaches is that the amount of memory they may require to find an optimal solution is not bounded. Consequently, on some datasets they can only be executed on machines with large amounts of memory. In this paper, we propose the first DP-based algorithm for learning optimal decision trees that operates under memory constraints. The core contributions of this work are: (1) strategies for freeing memory when too much of it is used by the algorithm; (2) an effective approach for recovering the parts of the tree that were freed. Our experiments demonstrate a favorable trade-off between memory constraints and the run times of our algorithm.
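The core idea of the abstract, bounding the memory of a DP by evicting cached subproblem results and transparently recomputing ("recovering") them when they are needed again, can be illustrated with a minimal sketch. This is not the paper's algorithm; the decorator, the LRU eviction policy, and the toy interval-splitting DP are all illustrative assumptions.

```python
from collections import OrderedDict

def bounded_memo(max_entries):
    """Memoise a recursive DP, but cap the cache size.

    When the cache is full, the least-recently-used entry is
    evicted (memory is 'freed'); if that subproblem is needed
    again, it is transparently recomputed ('recovered').
    Correctness is unaffected because recomputation of a pure
    function yields the same value.
    """
    def decorate(fn):
        cache = OrderedDict()
        def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)    # mark as recently used
                return cache[args]
            result = fn(*args)
            cache[args] = result
            if len(cache) > max_entries:
                cache.popitem(last=False)  # evict the LRU entry
            return result
        wrapper.cache = cache
        return wrapper
    return decorate

@bounded_memo(max_entries=3)
def best_cost(lo, hi):
    """Toy DP: minimal cost of recursively splitting interval [lo, hi)."""
    if hi - lo <= 1:
        return 0
    return (hi - lo) + min(
        best_cost(lo, m) + best_cost(m, hi) for m in range(lo + 1, hi)
    )

print(best_cost(0, 6))       # → 16, same answer as unbounded memoisation
print(len(best_cost.cache))  # never exceeds 3 cached subproblems
```

The trade-off the paper's experiments quantify shows up even here: a smaller `max_entries` lowers peak memory but forces more recomputation, increasing run time.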


Similar articles

Random Subspace with Trees for Feature Selection Under Memory Constraints

Dealing with datasets of very high dimension is a major challenge in machine learning. In this paper, we consider the problem of feature selection in applications where the memory is not large enough to contain all features. In this setting, we propose a novel tree-based feature selection approach that builds a sequence of randomized trees on small subsamples of variables mixing both variables ...


Optimal Decision Trees

We propose an Extreme Point Tabu Search (EPTS) algorithm that constructs globally optimal decision trees for classification problems. Typically, decision tree algorithms are greedy. They optimize the misclassification error of each decision sequentially. Our non-greedy approach minimizes the misclassification error of all the decisions in the tree concurrently. Using Global Tree Optimization (GTO)...


Learning Random Log-Depth Decision Trees under the Uniform Distribution

We consider three natural models of random log-depth decision trees. We give an efficient algorithm that for each of these models learns—as a decision tree—all but an inverse polynomial fraction of such trees using only uniformly distributed random examples.


Learning Decision Trees Using the Area Under the ROC Curve

ROC analysis is increasingly being recognised as an important tool for evaluation and comparison of classifiers when the operating characteristics (i.e. class distribution and cost parameters) are not known at training time. Usually, each classifier is characterised by its estimated true and false positive rates and is represented by a single point in the ROC diagram. In this paper, we show how...


Memory constraints affect statistical learning; statistical learning affects memory constraints

We present evidence that successful chunk formation during a statistical learning task depends on how well the perceiver is able to parse the information that is presented between successive presentations of the to-be-learned chunk. First, we show that learners acquire a chunk better when the surrounding information is also chunk-able in a visual statistical learning task. We tested three proce...


Journal

Journal title: Lecture Notes in Computer Science

Year: 2023

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-26419-1_24